
    An Unsupervised Algorithm for Change Detection in Hyperspectral Remote Sensing Data Using Synthetically Fused Images and Derivative Spectral Profiles

    Multitemporal hyperspectral remote sensing data have the potential to detect altered areas on the earth’s surface. However, dissimilar radiometric and geometric properties between the multitemporal data, caused by differences in acquisition time or sensor position, must be resolved before hyperspectral imagery can be used to detect changes in natural and human-impacted areas. In addition, noise in the hyperspectral spectra decreases change-detection accuracy when general change-detection algorithms are applied to hyperspectral images. To address these problems, we present an unsupervised change-detection algorithm based on statistical analyses of spectral profiles, where the profiles are generated by a synthetic image fusion method for multitemporal hyperspectral images. The method minimizes the noise between spectra at identical locations, thereby increasing the change-detection rate and decreasing the false-alarm rate, without reducing the dimensionality of the original hyperspectral data. In a quantitative comparison on a real dataset acquired by airborne hyperspectral sensors, we demonstrate that the proposed method provides superior change-detection results relative to state-of-the-art unsupervised change-detection algorithms.
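
    As a rough illustration of the kind of pipeline the abstract describes, the minimal Python sketch below flags changed pixels between two co-registered hyperspectral cubes by comparing derivative spectral profiles and applying a simple statistical threshold. The function name, the Euclidean distance, and the mean-plus-k-sigma threshold are assumptions for illustration, not the authors' fusion-based algorithm.

        # Minimal sketch (assumed details): change detection from two co-registered
        # hyperspectral cubes of shape (H, W, B) using derivative spectral profiles.
        import numpy as np

        def detect_changes(cube_t1, cube_t2, k=2.0):
            # First-order spectral derivatives suppress smooth radiometric offsets
            # between acquisitions while preserving absorption-feature shape.
            d1 = np.diff(cube_t1, axis=2)
            d2 = np.diff(cube_t2, axis=2)
            # Per-pixel Euclidean distance between the derivative profiles.
            dist = np.linalg.norm(d1 - d2, axis=2)
            # A simple mean + k*std threshold marks changed pixels; the paper's
            # statistical analysis of synthetically fused profiles is more involved.
            threshold = dist.mean() + k * dist.std()
            return dist > threshold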

    Quantum Separability of the vacuum for Scalar Fields with a Boundary

    Using the Green's function approach, we investigate separability of the vacuum state of a massless scalar field with a single Dirichlet boundary. Separability is demonstrated using the positive partial transpose criterion for effective two-mode Gaussian states of collective operators. In contrast to the vacuum energy, entanglement of the vacuum is not modified by the presence of the boundary. Comment: 4 pages, 1 figure, RevTeX, minor corrections; submitted to Phys. Rev.
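
    For context, the positive partial transpose (PPT) test mentioned in the abstract can be written down explicitly for a two-mode Gaussian state. The conventions below (vacuum covariance matrix normalized to the identity, mode ordering (x_1, p_1, x_2, p_2)) are assumptions of this sketch rather than details taken from the paper:

        \tilde{\sigma} = \Lambda\,\sigma\,\Lambda, \qquad
        \Lambda = \operatorname{diag}(1, 1, 1, -1), \qquad
        \Omega = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \oplus \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix},

        \text{PPT (separable)} \;\Longleftrightarrow\; \tilde{\sigma} + i\,\Omega \ge 0 \;\Longleftrightarrow\; \tilde{\nu}_{-} \ge 1,

    where \sigma is the covariance matrix of the effective two-mode state of the collective operators and \tilde{\nu}_{-} is the smallest symplectic eigenvalue of the partially transposed covariance matrix \tilde{\sigma}. By Simon's result, this condition is both necessary and sufficient for separability of two-mode Gaussian states, which is why verifying it suffices to establish separability of the vacuum here.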

    DRDT: Distributed and Reliable Data Transmission with Cooperative Nodes for Lossy Wireless Sensor Networks

    Recent studies have shown that links are extremely unreliable in realistic wireless sensor network environments. To recover from corrupted packets, most routing schemes, which assume ideal radio environments, use a retransmission mechanism that may cause unnecessary retransmissions. Guaranteeing energy-efficient reliable data transmission is therefore a fundamental routing issue in wireless sensor networks. However, introducing an entirely new reliable routing scheme is impractical, since the existing routing schemes cannot simply be replaced. This paper proposes a Distributed and Reliable Data Transmission (DRDT) scheme whose goal is to guarantee reliable data transmission efficiently. In particular, DRDT is based on a pluggable, modular approach so that it can be added to existing routing schemes. DRDT offers reliable data transmission using neighbor nodes, i.e., helper nodes: a helper node is selected, in a distributed manner, from among the receiver's neighbor nodes that overhear the data packet. When packet reception fails because of the low link quality between the sender and the receiver, DRDT reduces the number of retransmissions by delegating the retransmission task from the sender to a helper node that has a higher-quality link to the receiver. Comprehensive simulation results show that DRDT reduces end-to-end transmission cost by up to about 45% and end-to-end delay by about 40% compared to existing schemes.
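
    The core selection rule can be illustrated with a small, hedged Python sketch: among the neighbors that overheard the failed packet, retransmission is delegated to the one with the best link to the receiver. The data structures and the packet-reception-ratio (PRR) metric are illustrative assumptions; in the actual protocol the helper is chosen in a distributed manner rather than by a centralized function.

        # Hypothetical sketch of DRDT-style helper selection (assumed details).
        def select_helper(overhearers, prr_to_receiver, prr_floor=0.0):
            """Return the overhearing neighbor with the highest packet reception
            ratio (PRR) to the receiver, or None if no candidate exceeds prr_floor."""
            best_node, best_prr = None, prr_floor
            for node in overhearers:
                prr = prr_to_receiver.get(node, 0.0)
                if prr > best_prr:
                    best_node, best_prr = node, prr
            return best_node

        # Example: node "C" overheard the packet and has the best link to the receiver.
        helper = select_helper(["B", "C"], {"B": 0.72, "C": 0.93})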

    Alterations of the Foveal Avascular Zone Measured by Optical Coherence Tomography Angiography in Glaucoma Patients With Central Visual Field Defects

    Citation: Kwon J, Choi J, Shin JW, Lee J, Kook MS. Alterations of the foveal avascular zone measured by optical coherence tomography angiography in glaucoma patients with central visual field defects. Invest Ophthalmol Vis Sci. 2017;58:1637-1645. DOI: 10.1167

    PURPOSE. To investigate whether the area and shape of the foveal avascular zone (FAZ) as assessed by optical coherence tomography angiography (OCTA) are altered in glaucomatous eyes with central visual field defects (CVFDs).

    METHODS. A total of 78 patients with open-angle glaucoma with central or peripheral visual field defects (PVFDs) confined to a single hemifield were studied retrospectively. FAZ area and circularity were measured on OCTA images of the superficial retinal layer. Central retinal visual field (VF) sensitivity, using the Swedish Interactive Threshold Algorithm 24-2 VF, and macular ganglion cell-inner plexiform layer (mGCIPL) thickness were measured. The FAZ area was compared between VF-affected and VF-unaffected hemimacular segments in eyes with CVFDs and against matched hemimacular segments of eyes with PVFDs. Factors associated with the presence and severity of CVFDs at initial presentation were determined.

    RESULTS. Eyes with CVFDs showed a significantly larger FAZ area, lower FAZ circularity, and lower mGCIPL thickness than eyes with PVFDs. The mean hemi-FAZ area of VF-affected hemimaculas in eyes with CVFDs was significantly larger than that of the PVFD group (0.256 ± 0.07 mm² vs. 0.184 ± 0.07 mm²) and than that of the VF-unaffected hemimaculas of the CVFD group (0.179 ± 0.06 mm²; P < 0.05). Age, mean deviation, mGCIPL thickness, FAZ area, and circularity were associated with CVFDs (P < 0.05).

    CONCLUSIONS. Microcirculatory alterations in the perifovea are spatially correlated with central VF loss. Loss of FAZ circularity was significantly associated with the presence of CVFDs, whereas FAZ area was significantly associated with the severity of CVFDs.

    Keywords: foveal avascular zone, central visual field defects, optical coherence tomography angiography

    Glaucoma is a leading cause of irreversible blindness and is characterized by progressive retinal ganglion cell (RGC) death and axonal loss.1,2 Ocular blood flow (OBF) impairment and/or abnormal microcirculation, along with elevated IOP, may play an important role in glaucoma, particularly in normal-tension glaucoma (NTG).9,10 Optical coherence tomography angiography is a technique that uses differences between B-scans to generate contrast associated with motion, in particular the motion of blood cells through the vasculature. It identifies temporal changes at a specific location and recognizes them as erythrocyte motion. OCTA obtains detailed images of the macular microvascular networks with high resolution and in a reproducible manner.11 The foveal avascular zone (FAZ) is the round capillary-free zone within the macula. Manual measurement of the FAZ area on OCTA images of the superficial vascular network is a noninvasive, simple, and useful method for quantifying FAZ dimensions and architecture.12,18-22 FAZ circularity (the roundness of the FAZ border) can also help characterize FAZ architecture, and it can be reduced by vascular diseases of the macula such as diabetic retinopathy (DR) or retinal vein occlusion (RVO).
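
    As a rough sketch of how such measurements can be quantified, the Python fragment below computes FAZ area and circularity from a binary FAZ mask segmented on an OCTA en-face image. The pixel scale, the crude perimeter estimate, and the circularity definition 4*pi*A/P^2 (equal to 1 for a perfect circle) are assumptions for illustration, not the study's measurement protocol.

        # Minimal sketch (assumed details): FAZ area and circularity from a
        # binary mask, where mm_per_pixel gives the OCTA en-face pixel size.
        import numpy as np

        def faz_metrics(mask, mm_per_pixel):
            area_mm2 = mask.sum() * mm_per_pixel ** 2
            # Crude perimeter estimate: count foreground/background transitions
            # along rows and columns, scaled to millimeters.
            m = mask.astype(int)
            transitions = np.abs(np.diff(m, axis=0)).sum() + np.abs(np.diff(m, axis=1)).sum()
            perimeter_mm = transitions * mm_per_pixel
            circularity = 4 * np.pi * area_mm2 / perimeter_mm ** 2 if perimeter_mm else 0.0
            return area_mm2, circularity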

    OBJECT-BASED CLASSIFICATION OF AN URBAN AREA THROUGH A COMBINATION OF AERIAL IMAGE AND AIRBORNE LIDAR DATA

    This paper studies the effect of airborne elevation information on the classification of an aerial image in an urban area. In an urban area, it is difficult to classify buildings relying solely on the spectral information obtained from aerial images because urban buildings possess a variety of roof colors. Therefore, combining Lidar data with aerial images overcomes the difficulties caused by the heterogeneous appearance of buildings. In the first stage of this process, building information is extracted using the normalized Digital Surface Model, return information derived from the airborne Lidar data, and vegetation information obtained through preclassification. In the second stage, the aerial image is segmented into objects and overlaid with the building information extracted in the first stage; a predefined decision rule is then applied to determine whether each object is a building. In the final stage, the aerial image is classified using the building objects from the previous stage as ancillary data. This classification procedure uses elevation and intensity information obtained from the Lidar data, as well as the red, green, and blue bands obtained from the aerial image. As a result, the combination of the aerial image and the airborne Lidar data yields higher accuracy and improved classification, especially for building objects, compared with results that rely solely on the aerial image.
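
    The staged fusion described above can be sketched, under stated assumptions, in a few lines of Python: a height/vegetation rule on the Lidar-derived rasters flags building candidates, and a supervised classifier then labels image objects from per-object spectral and Lidar features. The 2.5 m height threshold, the feature set, and the RandomForest model are illustrative choices, not the paper's exact rules.

        # Hypothetical sketch (assumed details) of rule-based building extraction
        # followed by per-object classification with spectral + Lidar features.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def building_candidates(ndsm, vegetation_mask, height_thr=2.5):
            # Pixels more than ~2.5 m above ground that are not vegetation
            # are kept as building candidates (threshold is an assumed value).
            return (ndsm > height_thr) & ~vegetation_mask

        def classify_objects(train_features, train_labels, test_features):
            """Each feature row holds per-object means of R, G, B, Lidar elevation,
            Lidar intensity, and the fraction of building-candidate pixels."""
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(train_features, train_labels)
            return clf.predict(test_features)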